
    The infancy of the human brain

    The human infant brain is the only known machine able to master a natural language and develop explicit, symbolic, and communicable systems of knowledge that deliver rich representations of the external world. With the emergence of non-invasive brain imaging, we now have access to the unique neural machinery underlying these early accomplishments. After describing early cognitive capacities in the domains of language and number, we review recent findings that underline the strong continuity between human infants' and adults' neural architecture, with notably early hemispheric asymmetries and involvement of frontal areas. Studies of the strengths and limitations of early learning, and of brain dynamics in relation to regional maturational stages, promise to yield a better understanding of the sources of human cognitive achievements. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.

    Common neural basis for phoneme processing in infants and adults

    Investigating the degree of similarity between infants' and adults' representations of speech is critical to our understanding of infants' ability to acquire language. Phoneme perception plays a crucial role in language processing, and numerous behavioral studies have demonstrated similar capacities in infants and adults, but are these subserved by the same neural substrates or networks? In this article, we review event-related potential (ERP) results obtained in infants during phoneme discrimination tasks and compare them to results from the adult literature. The striking similarities observed in both behavior and ERPs between the initial and mature stages suggest continuity in processing and neural structure. We argue that infants have access from the beginning of life to phonemic representations, which are then reshaped, without explicit training or instruction, by the statistical distribution of the speech input until they converge on the native phonemic categories.
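At its computational core, the mismatch response measured in such phoneme discrimination tasks is a difference wave: the trial-averaged ERP to a deviant phoneme minus the trial-averaged ERP to the repeated standard. The sketch below is purely illustrative and uses simulated single-channel data with an assumed 250 Hz sampling rate; it is not the authors' analysis pipeline.

```python
# Illustrative sketch of a mismatch (difference-wave) computation on
# simulated data. Assumptions (not from the paper): 60 trials per
# condition, 200 samples per epoch at 250 Hz, one EEG channel.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 60, 200
t = np.arange(n_samples) / 250.0  # time axis in seconds

# Standards are noise-only; deviants carry a simulated mismatch peak
# around 400 ms after stimulus onset.
standard = rng.normal(0.0, 1.0, (n_trials, n_samples))
deviant = rng.normal(0.0, 1.0, (n_trials, n_samples))
deviant += 2.0 * np.exp(-((t - 0.4) ** 2) / 0.005)

# Averaging across trials cancels trial-to-trial noise, leaving the ERP.
erp_standard = standard.mean(axis=0)
erp_deviant = deviant.mean(axis=0)
mismatch = erp_deviant - erp_standard  # the difference wave

peak_latency = t[np.argmax(mismatch)]
print(f"mismatch peak at {peak_latency * 1000:.0f} ms")
```

A real infant EEG analysis would add filtering, artifact rejection, and baseline correction before averaging; the difference-wave step itself is as simple as shown here.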

    Hearing faces: how the infant brain matches the face it sees with the speech it hears

    Speech is not a purely auditory signal. From around 2 months of age, infants are able to correctly match the vowel they hear with the appropriate articulating face. However, there is no behavioral evidence of integrated audiovisual perception until 4 months of age, at the earliest, when an illusory percept can be created by the fusion of the auditory stimulus and the facial cues (McGurk effect). To understand how infants initially match the articulatory movements they see with the sounds they hear, we recorded high-density ERPs in response to auditory vowels that followed a congruent or incongruent silently articulating face in 10-week-old infants. In the first experiment, we determined that auditory-visual integration occurs during the early stages of perception, as in adults. The mismatch response was similar in timing and topography whether the preceding vowels were presented visually or aurally. In the second experiment, we studied audiovisual integration in the linguistic (vowel perception) and nonlinguistic (gender perception) domains. We observed a mismatch response for both types of change at similar latencies. Their topographies were significantly different, demonstrating that cross-modal integration of these features is computed in parallel by two different networks. Indeed, brain source modeling revealed that phoneme and gender computations were lateralized toward the left and right hemispheres, respectively, suggesting that each hemisphere possesses an early processing bias. We also observed repetition suppression in temporal regions and repetition enhancement in frontal regions. These results underscore how complex and structured the human cortical organization sustaining communication already is from the first weeks of life.

    Neural Reuse and the Nature of Evolutionary Constraints

    In humans, the reuse of neural structure is particularly pronounced at short, task-relevant timescales. Here, an argument is developed for the claim that facts about neural reuse at task-relevant timescales conflict with at least one characterization of neural reuse at an evolutionary timescale. It is then argued that, in order to resolve the conflict, we must conceptualize evolutionary-scale reuse more abstractly than has generally been recognized. The final section of the paper explores the relationship between neural reuse and human nature. It is argued that neural reuse is not well described as a process that constrains our present cognitive capacities. Instead, it liberates those capacities from the ancestral tethers that might otherwise have constrained them.

    Interoperable atlases of the human brain

    The last two decades have seen an unprecedented development of human brain mapping approaches at various spatial and temporal scales. Together, these have provided a large fund of information on many different aspects of the human brain, including micro- and macrostructural segregation, regional specialization of function, connectivity, and temporal dynamics. Atlases are central to integrating such diverse information in a topographically meaningful way. It is noteworthy that the brain mapping field has developed along several major lines, such as structure vs. function, postmortem vs. in vivo, individual features of the brain vs. population-based aspects, and slow vs. fast dynamics. In order to understand human brain organization, however, it seems inevitable that these different lines be integrated and combined into a multimodal human brain model. To this end, we held a workshop to determine the constraints on a multimodal human brain model needed to enable (i) the integration of different spatial and temporal scales and data modalities into a common reference system, and (ii) efficient data exchange and analysis. As detailed in this report, arriving at fully interoperable atlases of the human brain will still require much work at the frontiers of data acquisition, analysis, and representation. The latter may prove the most challenging task, in particular when it comes to representing features of vastly different scales of space, time, and abstraction. The potential benefits of such an endeavor, however, clearly outweigh the problems, as only a multimodal human brain atlas of this kind can provide a starting point from which the complex relationships between structure, function, and connectivity may be explored.

    Precursors to Natural Grammar Learning: Preliminary Evidence from 4-Month-Old Infants

    When learning a new language, grammar, although difficult, is very important, as grammatical rules determine the relations between the words in a sentence. There is evidence that very young infants can detect rules governing the relation between neighbouring syllables in short syllable sequences. A critical feature of all natural languages, however, is that many grammatical rules concern the dependency between non-neighbouring words or elements in a sentence, e.g. between an auxiliary and a verb inflection, as in "is singing". Thus, the question of when and how children begin to recognize such non-adjacent dependencies is fundamental to our understanding of language acquisition. Here, we use brain potential measures to demonstrate that the ability to recognize dependencies between non-adjacent elements in a novel natural language is observable by the age of 4 months. Brain responses indicate that 4-month-old German infants discriminate between grammatical and ungrammatical dependencies in auditorily presented Italian sentences after only brief exposure to correct sentences of the same type. As the grammatical dependencies are realized by phonologically distinct syllables, the present data most likely reflect phonologically based implicit learning mechanisms, which can serve as a precursor to later grammar learning.

    Functional near infrared spectroscopy (fNIRS) to assess cognitive function in infants in rural Africa

    Cortical mapping of cognitive function during infancy is poorly understood in low-income countries due to the lack of transportable neuroimaging methods. We have successfully piloted functional near infrared spectroscopy (fNIRS) as a neuroimaging tool in rural Gambia. Four- to eight-month-old infants watched videos of Gambian adults performing social movements while haemodynamic responses were recorded using fNIRS. We found distinct regions of the posterior superior temporal and inferior frontal cortex that showed either visual-social activation or vocally selective activation (vocal > non-vocal). The patterns of selective cortical activation in Gambian infants replicated those observed in similarly aged infants in the UK. These are the first reported data on the measurement of localized functional brain activity in young infants in Africa, and they demonstrate the potential that fNIRS offers for field-based neuroimaging research on cognitive function in resource-poor rural communities.
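The "vocal > non-vocal" selectivity reported here is, at its simplest, a per-channel contrast: for each fNIRS channel, the mean oxyhaemoglobin (HbO) response to vocal stimuli is compared against the mean response to non-vocal stimuli. The sketch below is purely illustrative and runs on simulated amplitudes; the channel count, trial count, and threshold are assumptions, not the authors' pipeline.

```python
# Illustrative sketch of a per-channel vocal > non-vocal fNIRS contrast
# on simulated data. Assumptions (not from the paper): 20 channels,
# 30 trials per condition, peak HbO amplitudes as the per-trial measure.
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_trials = 20, 30

# Simulated peak HbO amplitudes per channel and trial.
vocal = rng.normal(0.0, 1.0, (n_channels, n_trials))
nonvocal = rng.normal(0.0, 1.0, (n_channels, n_trials))
vocal[3] += 1.5  # one hypothetical "vocally selective" channel

# Welch-style t statistic per channel, computed by hand with NumPy.
diff = vocal.mean(axis=1) - nonvocal.mean(axis=1)
se = np.sqrt(vocal.var(axis=1, ddof=1) / n_trials +
             nonvocal.var(axis=1, ddof=1) / n_trials)
t_stat = diff / se

# Crude fixed threshold, for illustration only; a real analysis would
# correct for multiple comparisons across channels.
selective = np.flatnonzero(t_stat > 3.0)
print("vocally selective channels:", selective)
```

In practice the per-trial measure would come from a haemodynamic response model fitted to the optical time series, but the selectivity logic reduces to a condition contrast of this form.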

    Multi-level evidence of an allelic hierarchy of USH2A variants in hearing, auditory processing and speech/language outcomes.

    Language development builds upon a complex network of interacting subservient systems. It therefore follows that variations in, and subclinical disruptions of, these systems may have secondary effects on emergent language. In this paper, we consider the relationship between genetic variants, hearing, auditory processing, and language development. We employ whole genome sequencing in a discovery family to target association and gene × environment interaction analyses in two large population cohorts: the Avon Longitudinal Study of Parents and Children (ALSPAC) and UK10K. These investigations indicate that USH2A variants are associated with altered low-frequency sound perception, which, in turn, increases the risk of developmental language disorder. We further show that Ush2a heterozygote mice have low-level hearing impairments, persistent higher-order acoustic processing deficits, and altered vocalizations. These findings provide new insights into the complexity of the genetic mechanisms serving language development and its disorders, and into the relationships between developmental auditory and neural systems.